Memory gradient method for multiobjective optimization

Abstract

In this paper, we propose a new descent method, called the multiobjective memory gradient method, for finding Pareto critical points of a multiobjective optimization problem. The main idea of the method is to take a combination of the current descent direction and past multi-step iterative information as the new search direction, and to obtain the stepsize by two types of strategies. It is proved that the direction developed with suitable parameters always satisfies the sufficient descent condition at each iteration. Under mild assumptions, we establish the global convergence and convergence rates of our method. Computational experiments are given to demonstrate the effectiveness of the proposed method.
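
Read literally, the scheme combines a common descent direction (the negative of the minimum-norm element of the convex hull of the objective gradients) with past search directions. A minimal sketch for two objectives, assuming a fixed stepsize and one-step memory — the function names and the parameters `alpha` and `beta` are illustrative, not the paper's actual scheme:

```python
import numpy as np

def common_descent_direction(g1, g2):
    # Negative of the minimum-norm element of conv{g1, g2}: a standard
    # common descent direction for two smooth objectives; it vanishes
    # exactly at Pareto-critical points.
    diff = g1 - g2
    denom = diff @ diff
    t = 0.5 if denom == 0.0 else float(np.clip(-(g2 @ diff) / denom, 0.0, 1.0))
    return -(t * g1 + (1.0 - t) * g2)

def memory_gradient_step(x, grads, d_prev, alpha=0.1, beta=0.2):
    # One illustrative iteration: mix the current common descent
    # direction with the previous search direction (one-step memory).
    g1, g2 = (g(x) for g in grads)
    d = common_descent_direction(g1, g2) + beta * d_prev
    return x + alpha * d, d
```

On a pair of quadratics such as f1 = ||x||^2 and f2 = ||x - b||^2, the iterates approach the Pareto set (the segment between the two minimizers), where the common descent direction vanishes.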

Related articles

Multiobjective Optimization Strategies for Linear Gradient Chromatography

The increase in the scale of preparative chromatographic processes for biopharmaceutical applications now necessitates the development of effective optimization strategies for large-scale processes in a manufacturing setting. The current state of the art for optimization of preparative chromatography has been limited to single objective functions. Further, there is a lack of understanding of wh...

Multiple-gradient Descent Algorithm for Multiobjective Optimization

The steepest-descent method is a well-known and effective single-objective descent algorithm when the gradient of the objective function is known. Here, we propose a particular generalization of this method to multi-objective optimization by considering the concurrent minimization of n smooth criteria {J_i} (i = 1, ..., n). The novel algorithm is based on the following observation: consider a...
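
The observation alluded to here (Désidéri's MGDA) reduces to a small quadratic subproblem: find the minimum-norm convex combination of the gradients; its negative is then a descent direction for all criteria at once, and it is zero only at Pareto-stationary points. A sketch of that subproblem solved by Frank-Wolfe iterations — the solver choice and iteration count are assumptions for illustration:

```python
import numpy as np

def min_norm_weights(grads, iters=200):
    # Frank-Wolfe sketch for: min over the simplex of || sum_i w_i * grads[i] ||^2.
    # If v = sum_i w_i * grads[i] is nonzero, -v is a common descent
    # direction for all objectives; v = 0 signals Pareto stationarity.
    G = np.asarray(grads)          # shape (n_objectives, dim)
    n = G.shape[0]
    w = np.full(n, 1.0 / n)        # start at the simplex barycenter
    M = G @ G.T                    # Gram matrix of the gradients
    for _ in range(iters):
        i = int(np.argmin(M @ w))  # best vertex of the simplex
        d = -w
        d[i] += 1.0                # move toward that vertex
        den = d @ M @ d
        if den <= 0.0:
            break
        t = float(np.clip(-(w @ M @ d) / den, 0.0, 1.0))  # exact line search
        w = w + t * d
    return w
```

For gradients [2, 0] and [0, 1], this returns weights [0.2, 0.8]; the combined vector [0.4, 0.8] has a positive inner product with both gradients, so its negative decreases both objectives.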

Newton's Method for Multiobjective Optimization

We propose an extension of Newton’s Method for unconstrained multiobjective optimization (multicriteria optimization). The method does not scalarize the original vector optimization problem, i.e. we do not make use of any of the classical techniques that transform a multiobjective problem into a family of standard optimization problems. Neither ordering information nor weighting factors for the...

Global Convergence of a Memory Gradient Method for Unconstrained Optimization

The memory gradient method is used for unconstrained optimization, especially for large-scale problems. The first idea of the memory gradient method was proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method which generates a descent search direction for the objective function at every iteration. We show that our method converges globally...
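
The flavor of such a single-objective method can be sketched as follows; this is not the cited paper's exact scheme — the parameter values, the reset safeguard, and the Armijo backtracking are illustrative assumptions:

```python
import numpy as np

def memory_gradient_minimize(f, grad, x0, beta=0.3, c=1e-4,
                             max_iter=500, tol=1e-8):
    # Illustrative memory-gradient loop: d_k = -g_k + beta * d_{k-1},
    # reset to -g_k whenever sufficient descent fails, with Armijo
    # backtracking for the stepsize.
    x = np.asarray(x0, dtype=float)
    d = -grad(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g + beta * d
        if g @ d > -1e-4 * (g @ g):   # safeguard: enforce sufficient descent
            d = -g
        t = 1.0
        while t > 1e-12 and f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5                  # Armijo backtracking
        x = x + t * d
    return x
```

The safeguard is what makes every search direction a descent direction, which is the property the cited convergence analyses rely on.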

A memory gradient method without line search for unconstrained optimization

Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. The first idea of memory gradient methods was proposed by Miele and Cantrell (1969) and subsequently extended by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method which generates a descent search direction for the objective function at every iteration a...

Journal

Journal title: Applied Mathematics and Computation

Year: 2023

ISSN: 1873-5649, 0096-3003

DOI: https://doi.org/10.1016/j.amc.2022.127791